
    Quantum Computation, Markov Chains and Combinatorial Optimisation

    This thesis addresses two questions related to the title, Quantum Computation, Markov Chains and Combinatorial Optimisation. The first question involves an algorithmic primitive of quantum computation, quantum walks on graphs, and its relation to Markov chains. Quantum walks have been shown in certain cases to mix faster than their classical counterparts. Lifted Markov chains, consisting of a Markov chain on an extended state space which is projected back down to the original state space, also show considerable speedups in mixing time. We design a lifted Markov chain that in some sense simulates any quantum walk. Concretely, we construct a lifted Markov chain on a connected graph G with n vertices that mixes exactly to the average mixing distribution of a quantum walk on G. Moreover, the mixing time of this chain is the diameter of G. We then consider practical consequences of this result. In the second part of this thesis we address a classic unsolved problem in combinatorial optimisation: graph isomorphism. A theorem of Kozen states that two graphs on n vertices are isomorphic if and only if there is a clique of size n in the weak modular product of the two graphs. Furthermore, a straightforward corollary of this theorem and Lovász’s sandwich theorem is that if the weak modular product of two graphs is perfect, then checking whether the graphs are isomorphic can be done in time polynomial in n. We enumerate the necessary and sufficient conditions for the weak modular product of two simple graphs to be perfect. Interesting cases include complete multipartite graphs and disjoint unions of cliques. We find that all perfect weak modular products have factors that fall into classes of graphs for which testing isomorphism is already known to be polynomial in the number of vertices. Open questions and further research directions are discussed.
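
    As a concrete illustration of the Kozen reduction mentioned above, the sketch below (not taken from the thesis; networkx and all helper names are assumptions) builds the weak modular product of two graphs and looks for an n-clique. It is exponential-time in general, which is exactly why the perfectness condition matters.

        # Hypothetical sketch of Kozen's reduction: two n-vertex graphs are
        # isomorphic iff their weak modular product contains a clique of size n.
        import networkx as nx

        def weak_modular_product(G, H):
            P = nx.Graph()
            P.add_nodes_from((g, h) for g in G for h in H)
            for (g1, h1) in P.nodes:
                for (g2, h2) in P.nodes:
                    if g1 != g2 and h1 != h2:
                        # connect the pair iff adjacency agrees in both factors
                        if G.has_edge(g1, g2) == H.has_edge(h1, h2):
                            P.add_edge((g1, h1), (g2, h2))
            return P

        def isomorphic_via_kozen(G, H):
            n = G.number_of_nodes()
            if n != H.number_of_nodes():
                return False
            P = weak_modular_product(G, H)
            return any(len(c) >= n for c in nx.find_cliques(P))  # exponential in general

        print(isomorphic_via_kozen(nx.path_graph(4), nx.path_graph(4)))   # True
        print(isomorphic_via_kozen(nx.path_graph(4), nx.cycle_graph(4)))  # False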

    Optimal Stopping with Gaussian Processes

    We propose a novel group of Gaussian Process-based algorithms for fast approximate optimal stopping of time series, with specific applications to financial markets. We show that structural properties commonly exhibited by financial time series (e.g., the tendency to mean-revert) allow the use of Gaussian and Deep Gaussian Process models that further enable us to analytically evaluate optimal stopping value functions and policies. We additionally quantify uncertainty in the value function by propagating the price model through the optimal stopping analysis. We compare and contrast our proposed methods against a sampling-based method, as well as a deep learning-based benchmark that is currently considered the state of the art in the literature. We show that our family of algorithms outperforms the benchmarks on three historical time series datasets that include intra-day and end-of-day equity stock prices as well as daily US Treasury yield curve rates.
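
    To give a flavour of the idea (this is a minimal sketch, not the paper's analytic method): fit a Gaussian Process to an observed price series, draw posterior sample paths, and estimate a stopping value by a crude backward induction over those samples. The scikit-learn calls are real; the toy data and the path-independent continuation estimate are simplifying assumptions.

        # Hypothetical GP-based stopping sketch; not the analytic approach of the paper.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(0)
        t = np.arange(50, dtype=float).reshape(-1, 1)          # observation times
        prices = 100 + np.cumsum(rng.normal(0, 0.5, 50))       # toy price series

        gp = GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(0.1), normalize_y=True)
        gp.fit(t, prices)

        horizon = np.arange(50, 80, dtype=float).reshape(-1, 1)
        paths = gp.sample_y(horizon, n_samples=500, random_state=1)  # posterior sample paths

        # crude backward induction: stop when the current price beats a Monte Carlo
        # estimate of the continuation value (path-independent, so only indicative)
        value = paths[-1].copy()
        for k in range(len(horizon) - 2, -1, -1):
            continuation = value.mean()
            value = np.where(paths[k] > continuation, paths[k], value)

        print("estimated stopping value:", value.mean())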

    A Canonical Data Transformation for Achieving Inter- and Within-group Fairness

    Increases in the deployment of machine learning algorithms for applications that deal with sensitive data have brought attention to the issue of fairness in machine learning. Many works have been devoted to applications that require different demographic groups to be treated fairly. However, algorithms that aim to satisfy inter-group fairness (also called group fairness) may inadvertently treat individuals within the same demographic group unfairly. To address this issue, we introduce a formal definition of within-group fairness that maintains fairness among individuals within the same group. We propose a pre-processing framework to meet both inter- and within-group fairness criteria with little compromise in accuracy. The framework maps the feature vectors of members of different groups to an inter-group-fair canonical domain before feeding them into a scoring function. The mapping is constructed to preserve the relative relationship between the scores obtained from the unprocessed feature vectors of individuals from the same demographic group, guaranteeing within-group fairness. We apply this framework to the COMPAS risk assessment and Law School datasets and compare its performance in achieving inter-group and within-group fairness to two regularization-based methods.
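
    One way to picture such an order-preserving canonical mapping (a hypothetical sketch, not the paper's exact construction) is quantile matching: send each individual's score to its within-group quantile, then read off that quantile of a shared reference distribution, so within-group ordering is untouched while all groups land on a common scale.

        # Hypothetical quantile-matching sketch of a canonical, within-group
        # order-preserving transformation; names and data are illustrative.
        import numpy as np

        def to_canonical(scores_by_group, reference):
            ref_sorted = np.sort(reference)
            out = {}
            for g, s in scores_by_group.items():
                ranks = np.argsort(np.argsort(s))             # within-group ranks
                q = (ranks + 0.5) / len(s)                    # within-group quantiles
                out[g] = np.quantile(ref_sorted, q)           # map onto the shared scale
            return out

        rng = np.random.default_rng(0)
        groups = {"A": rng.normal(0.0, 1.0, 200), "B": rng.normal(1.0, 2.0, 200)}
        pooled = np.concatenate(list(groups.values()))        # shared reference distribution
        canonical = to_canonical(groups, pooled)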

    On the Connection between Game-Theoretic Feature Attributions and Counterfactual Explanations

    Explainable Artificial Intelligence (XAI) has received widespread interest in recent years, and two of the most popular types of explanations are feature attributions and counterfactual explanations. These classes of approaches have largely been studied independently, and the few attempts at reconciling them have been primarily empirical. This work establishes a clear theoretical connection between game-theoretic feature attributions, focusing on but not limited to SHAP, and counterfactual explanations. After motivating operative changes to Shapley-value-based feature attributions and counterfactual explanations, we prove that, under certain conditions, they are in fact equivalent. We then extend the equivalence result to game-theoretic solution concepts beyond Shapley values. Moreover, through the analysis of the conditions of such equivalence, we shed light on the limitations of naively using counterfactual explanations to provide feature importances. Experiments on three datasets quantitatively show the difference in explanations at every stage of the connection between the two approaches and corroborate the theoretical findings. Comment: Accepted at AIES 202
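
    For readers less familiar with the attribution side of this connection, the snippet below is a brute-force Shapley computation for a toy model with a fixed baseline (one common choice of value function; the paper works with more general game-theoretic formulations, and the model and names here are hypothetical).

        # Hypothetical exact Shapley attribution for a small model; the value of a
        # coalition S is f evaluated with features outside S held at a baseline.
        import itertools, math
        import numpy as np

        def shapley_values(f, x, baseline):
            d = len(x)
            phi = np.zeros(d)
            for i in range(d):
                others = [j for j in range(d) if j != i]
                for r in range(d):
                    for S in itertools.combinations(others, r):
                        w = math.factorial(r) * math.factorial(d - r - 1) / math.factorial(d)
                        z_without = baseline.copy()
                        z_without[list(S)] = x[list(S)]
                        z_with = z_without.copy()
                        z_with[i] = x[i]
                        phi[i] += w * (f(z_with) - f(z_without))
            return phi

        f = lambda z: 2 * z[0] + z[1] * z[2]                  # toy model
        x, baseline = np.array([1.0, 2.0, 3.0]), np.zeros(3)
        print(shapley_values(f, x, baseline))                 # sums to f(x) - f(baseline)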

    Optimal Admission Control for Multiclass Queues with Time-Varying Arrival Rates via State Abstraction

    We consider a novel queuing problem in which the decision-maker must choose to accept or reject randomly arriving tasks into a queue with no buffer, served by N identical servers. Each task has a price, which is a positive real number, and a class. Each class of task has a different price distribution and service rate, and arrives according to an inhomogeneous Poisson process. The objective is to decide which tasks to accept so that the total price of the tasks processed is maximised over a finite horizon. We formulate the problem as a discrete-time Markov Decision Process (MDP) with a hybrid state space. We show that the optimal value function has a specific structure, which enables us to solve the hybrid MDP exactly. Moreover, we rigorously prove that as the gap between successive decision epochs grows smaller, the discrete-time solution approaches the optimal solution to the original continuous-time problem. To improve the scalability of our approach to larger numbers of servers and task classes, we present an approximation based on state abstraction. We validate our approach on synthetic data as well as a real financial fraud data set, which is the motivating application for this work.
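
    The flavour of the discrete-time formulation can be seen in a toy backward induction with a single task class (a hypothetical simplification: constant arrival probability, at most one service completion per epoch, and a discretised price distribution, none of which matches the paper's model exactly).

        # Hypothetical finite-horizon backward induction for accept/reject decisions.
        import numpy as np

        T, N = 100, 3                      # decision epochs, identical servers
        p_arrival, p_done = 0.3, 0.2       # per-epoch arrival / per-server completion prob.
        prices = np.linspace(0.5, 5.0, 10)             # discretised price support
        p_price = np.full(prices.size, 1 / prices.size)

        V = np.zeros((T + 1, N + 1))       # V[t, b]: value-to-go with b busy servers

        def cont(t, b):
            # expected value at the next epoch, allowing at most one completion
            none_finish = (1 - p_done) ** b
            return none_finish * V[t + 1, b] + (1 - none_finish) * V[t + 1, max(b - 1, 0)]

        for t in range(T - 1, -1, -1):
            for b in range(N + 1):
                reject = cont(t, b)
                if b < N:
                    # threshold structure: accept iff the price beats the
                    # opportunity cost of occupying one more server
                    accept = prices + cont(t, b + 1)
                    on_arrival = p_price @ np.maximum(accept, reject)
                else:
                    on_arrival = reject    # all servers busy: must reject
                V[t, b] = p_arrival * on_arrival + (1 - p_arrival) * reject

        print("expected reward from an empty system:", V[0, 0])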